MRes Bioengineering students work on their research project throughout the year. You can apply for one of the projects listed below, or contact your preferred supervisor to discuss a different project.

You must name at least one project or potential supervisor in your personal statement when you apply.

Applications will be considered in three rounds. We encourage you to apply in Round 1 or 2 for the best chance of being considered for your preferred project. If you apply in Round 3, please consider including a second or third choice of project in your application, as some projects may already have been allocated.

Visit the course page for full instructions and deadlines.

Projects available for 2025-26 entry

Professor Anil Bharath

Profile: https://profiles.imperial.ac.uk/a.bharath 

Contact details: a.bharath@imperial.ac.uk

Project title: Generative Modelling: Insulin Dose-Response Modelling

Description: To be set in conjunction with Nick and Pantelis. Lab-based.
Techniques / technologies: modelling, machine learning, computational and theoretical modelling
Dr Chiu Fan Lee

Profile: https://profiles.imperial.ac.uk/c.lee 

Contact details: c.lee@imperial.ac.uk

Project title: Cytoplasmic organisation through phase separation

Description: Biological cells organise their contents in distinct compartments called organelles, typically enclosed by a lipid membrane that forms a physical barrier and controls molecular exchange with the surrounding cytosol. Recently, an intriguing class of organelles lacking a membrane has come under intense study. Membrane-less organelles have attracted great interest from the biology community because they are present in many organisms, from yeast to mammalian cells, and are critical for multiple biological functions. For example, P granules are involved in the asymmetric division of the Caenorhabditis elegans embryo, and stress granules assemble during environmental stress and protect cytoplasmic RNA from degradation. Strong experimental evidence indicates that membrane-less organelles are assembled via liquid–liquid phase separation, a common phenomenon in everyday life responsible, for example, for the formation of oil droplets in water. Under equilibrium conditions, phase separation is well understood. However, cells are driven away from equilibrium by multiple energy-consuming processes, such as ATP-driven protein phosphorylation, which can potentially affect the phase-separating behaviour of membrane-less constituents. In this project, we will study how these energy-consuming processes affect the dynamics of phase separation in the context of the cell cytoplasm.
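One way to formalise how energy-consuming processes could alter phase separation (a sketch only, not the project's prescribed model) is to start from the standard continuum description of phase separation, Model B, and add hypothetical reaction terms for the ATP-driven interconversion between a phase-separating and an inert form of a protein:

```latex
\frac{\partial \phi}{\partial t}
  = M \nabla^{2} \frac{\delta F[\phi]}{\delta \phi}
    + k_{\mathrm{on}}\,\psi - k_{\mathrm{off}}\,\phi ,
\qquad
F[\phi] = \int \mathrm{d}^{3}r \left[ f(\phi) + \frac{\kappa}{2}\,|\nabla \phi|^{2} \right] ,
```

where $\phi$ is the local concentration of the phase-separating species, $\psi$ that of its inert (e.g. phosphorylated) form, $M$ a mobility, $f(\phi)$ a double-well free-energy density, and $k_{\mathrm{on}}$, $k_{\mathrm{off}}$ the conversion rates that break detailed balance. All symbols here are illustrative assumptions introduced for this sketch.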
Project title: Modelling tissue regeneration as fluid flow: Implications for wound healing

Description: If you scratch a layer of epithelial tissue, the epithelial cells have the remarkable ability to proliferate in a coordinated manner and repopulate the scratched area, to the extent that the tissue almost regenerates its original form. This extraordinary capacity for regeneration is far from fully understood. Recent advances have revealed that the cells at the edge of the wound are not the only active players in the healing process; rather, cells deep in the bulk exhibit active motility forces [1] and large-scale swirling patterns [2]. The highly dynamic nature of tissue under regeneration suggests the possibility of viewing wound healing as the collective movement of a fluid under self-generated stress (due to cell motility) and self-generated pressure (due to cell division). This perspective has recently been investigated using computer simulations, which were able to capture some of the salient features of experimental observations [3]. To render the model more realistic, a number of the model assumptions need to be connected to actual biological processes. Further, a comprehensive verification of the model requires derivable predictions that can be validated by experiments. In this project, we will first construct a cell proliferation model by incorporating elements of cell adhesion, motility and proliferation common to epithelial cells. We will then employ simulation methods to study how tissue regeneration proceeds for different types of tissue damage. For instance, we will investigate how the shape of the finger-like protrusions from the edges of the wound varies with the shape of the wound. This project will lead to an in-depth understanding of the biophysical mechanisms behind wound healing, and will equip the student with the computational skills to model cell proliferation in diverse contexts.

References
1. Trepat, X., et al., Physical forces during collective cell migration. Nature Physics, 2009. 5(6): p. 426-430.
2. Angelini, T., et al., Glass-like dynamics of collective cell migration. Proceedings of the National Academy of Sciences, 2011. 108(12): p. 4714-4719.
3. Basan, M., et al., Alignment of cellular motility forces with tissue flow as a mechanism for efficient wound healing. Proceedings of the National Academy of Sciences, 2013. 110(7): p. 2452-2459.
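As a minimal flavour of the kind of simulation involved, the classic Fisher–KPP reaction–diffusion model already couples cell motility (diffusion) and proliferation (logistic growth) and produces travelling fronts that close a wound. The sketch below is illustrative only: all parameter values are hypothetical, and the project's model will be richer (adhesion, motility forces, realistic wound geometry).

```python
import numpy as np

# Illustrative 1-D Fisher-KPP sketch of wound closure: cell density
# u(x, t) spreads by diffusion (motility) and grows logistically
# (proliferation), so fronts invade the cell-free gap from both sides.
# All parameter values are hypothetical.
def heal(n=200, steps=4000, D=1.0, r=1.0, dx=0.5, dt=0.01):
    u = np.ones(n)
    u[n // 3 : 2 * n // 3] = 0.0          # the "wound": a cell-free gap
    for _ in range(steps):
        lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
        u = np.clip(u + dt * (D * lap + r * u * (1 - u)), 0.0, 1.0)
    return u

profile = heal()                          # by t = 40 the gap has refilled
```

The front advances at speed roughly 2√(Dr), so the healing time scales with wound width, one of the simple derivable predictions the project description calls for.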
Project title: Biophysical Modelling of Giant Vacuole Dynamics in Endothelial Cells

Description: Cells in physiological conditions are often subjected to external mechanical stresses, such as pressure gradients, to which they typically respond by changing their shape. This ability has been shown to be fundamental for several important biological functions, such as lumen formation during sprouting angiogenesis [1] and aqueous humour outflow and intraocular pressure regulation [2]. In these processes, endothelial cells are subjected to large hydrodynamic pressure gradients, to which they respond by forming fluid-filled membrane invaginations. Around these structures, actin filaments and myosin motors are recruited to construct an active contractile shell that sustains the vacuole. At long times the vacuoles are observed to collapse, which is likely caused by the formation of a second pore at the top of the vacuole through which the internal chamber pressure is released. This complex behaviour has been hypothesised to be the characteristic cellular response to large pressure gradients [1], but the biophysical mechanisms underlying it are currently under investigation. In this project, the candidate will contribute to the formulation of a biophysical model of this phenomenon, called inverse blebbing. Two main research topics are available: (i) understanding the vacuole collapse process through modelling of membrane pore formation within the framework of active gel theory [3]; (ii) understanding the initial invagination dynamics and formulating a suitable stochastic model of it. The candidate will be required to learn some basics of elasticity theory and to perform both analytical and numerical calculations.

[1] Gebala V, Collins R, Geudens I, Phng LK, and Gerhardt H  (2016). Nat Cell Biol 18:443.
[2] Pedrigi RM, Simon D, Reed A, Stamer WD, and Overby DR (2011). Exp Eye Res 92:57.
[3] Prost J, Julicher F, and Joanny JF (2015). Nat Phys 11:111-117.  
Project title: Biophysical modelling of the pathogenesis of Alzheimer's disease

Description: Many human diseases are characterised by the formation of amyloid fibrils (linear aggregates of abnormally folded proteins), among which Alzheimer's disease (AD) is a prevalent and particularly morbid one that affects us all. A common signature of amyloid-related pathogenesis is the gradual replacement of healthy tissue by aggregates of amyloid fibrils (e.g., in the form of amyloid plaques (AP) in AD), resulting in the degradation of tissue function. Amyloid fibrils are insoluble biopolymers that are robust against proteolysis, and their presence in the form of extracellular AP and intracellular neurofibrillary tangles is the defining histopathologic feature of AD. However, mounting evidence indicates that proteins in the monomeric and oligomeric forms, rather than in the fibrillar form, are predominantly responsible for cell death. This finding raises a conundrum about the role of amyloid fibrils in amyloid pathogenesis: since monomers have a tendency to self-assemble into amyloid fibrils, fibril formation should be a good way to sequester monomers and oligomers in the system by locking them up in the fibrillar form. Unfortunately, evidence points to the contrary, as demonstrated by the fact that injecting fibrils into transgenic mice induces the onset of amyloid pathogenesis, and by the well-documented cases of Creutzfeldt-Jakob disease resulting from ingesting the misfolded form of prion proteins. In other words, there is a disconnect between the histopathologic characterisation of AD and our understanding of the cell-death-inducing mechanism at the molecular level.

Physically, amyloid fibrillation and AP formation are well described by the physics of aggregation [1]. Separately, there are well-developed methods to model toxicity-induced cell death. This project thus aims to combine these two distinct fields to further our understanding of Alzheimer's disease pathogenesis.

[1] Hong L, Lee C F and Huang Y J 2016 Statistical Mechanics and Kinetics of Amyloid Fibrillation. To appear in Biophysics and biochemistry of protein aggregation, edited by J.-M. Yuan and H.-X. Zhou (World Scientific). E-print: arxiv:1609.01569.
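The aggregation physics referred to above is often summarised by nucleation–elongation kinetics. The toy integration below is a sketch only: the Oosawa-type rate laws and all rate constants are illustrative assumptions, not the detailed model of Ref. [1]. It shows the characteristic sigmoidal conversion of monomer into fibril mass.

```python
# Toy nucleation-elongation (Oosawa-type) kinetics of amyloid
# fibrillation: monomers nucleate new fibrils, fibril ends grow by
# monomer addition, and total protein mass is conserved. All rate
# constants are hypothetical.
def fibrillate(m_tot=1.0, kn=1e-4, kp=50.0, nc=2, dt=1e-3, steps=200_000):
    P, M = 0.0, 0.0                       # fibril number and fibril mass
    for _ in range(steps):
        m = m_tot - M                     # free monomer (mass conservation)
        P += dt * kn * m**nc              # primary nucleation
        M += dt * (2 * kp * m * P + nc * kn * m**nc)   # elongation + nuclei
        M = min(M, m_tot)
    return M

final_mass = fibrillate()                 # nearly all monomer ends up fibrillar
```

That essentially all monomer is eventually sequestered in fibrils is exactly why the apparent toxicity of monomers and oligomers, rather than fibrils, is such a conundrum.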

Project title: Modelling bone marrow stem cell dynamics in mouse

Description: Leukemia cells can be highly motile in the bone marrow environment. However, the exact type of motion executed by a leukemia cell, be it diffusive, sub-diffusive or super-diffusive, remains unclear. In this project, with the help of high-resolution and high-frequency imaging data, we will perform an in-depth study of the motility patterns of leukemia cells in the bone marrow. We will also take the complex bone marrow environment into account and investigate how leukemia cells interact with its diverse components (e.g., bone cells and blood vessels). The knowledge generated in this project will help us understand the dynamical behaviour of leukemia cells in the bone marrow, and may provide key insight into how blood cancer develops from a few initial malignant cells.

Activities
1. Image analysis of the trajectories of stem cells in the bone marrow and the changes in cell shapes

2. Statistical analysis of how stem cell movement is affected by the bone marrow environment

3. Statistical analysis of how the stem cell interacts with the diverse cell types in the bone marrow

4. Particle simulation incorporating the movement and interactions of the stem cell as well as the various cell types in the bone marrow environment

5. Mathematical modelling of the system by formulating a set of partial differential equations

6. Numerical analysis of the mathematical model
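As a sketch of activities 1–2, the standard way to distinguish diffusive, sub-diffusive and super-diffusive motion is to fit the scaling exponent α of the mean-squared displacement, MSD(τ) ∝ τ^α. The example below is illustrative: the trajectory is a synthetic random walk standing in for real imaging data.

```python
import numpy as np

# Classify a trajectory by its mean-squared-displacement scaling,
# MSD(tau) ~ tau**alpha: alpha near 1 is diffusive, alpha < 1
# sub-diffusive, alpha > 1 super-diffusive. The trajectory below is a
# synthetic 2-D random walk standing in for a tracked cell.
def msd_exponent(traj, max_lag=50):
    lags = np.arange(1, max_lag)
    msd = np.array([np.mean(np.sum((traj[lag:] - traj[:-lag])**2, axis=1))
                    for lag in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(msd), 1)   # log-log fit
    return slope

rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=(10_000, 2)), axis=0)
alpha = msd_exponent(walk)                # close to 1 for a Brownian track
```

The same estimator, applied to tracked cell positions, gives a first quantitative handle on the motility pattern before any environment-dependent analysis.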

Dr Dandan Zhang

Profile: https://profiles.imperial.ac.uk/d.zhang17 

Contact details: d.zhang17@imperial.ac.uk 

Project title: Robot Learning for Robotic Manipulation Based on Multi-Sensor Fusion

Project Description:
To enhance the manipulation abilities of robots, it is crucial to integrate and interpret sensory information from multiple modalities. The principal modalities — vision and tactile — provide distinct yet complementary data streams. Vision provides spatial and contextual understanding, while tactile feedback offers details about contact forces and surface textures.
This MSc project aims to explore multimodal representation learning, targeting the development of integrated sensory processing methods that enhance the efficiency and performance of reinforcement learning for dexterous tasks with multimodal perception data as input. By developing a unified representation of visual and tactile data, the project will create a foundation for robotic systems to execute complex manipulation tasks with greater awareness and precision.

Project Goals:
i) Develop methods for fusing visual and tactile data, ensuring that the resulting representation captures the salient features from both modalities.
ii) Utilize deep learning techniques to encode high-dimensional raw sensory data into a lower-dimensional, task-relevant space; and explore different architectures and frameworks for learning a joint multimodal representation that is informative for a contact-rich robotic manipulation task.
iii) Implement reinforcement learning algorithms that can efficiently utilize the learned multimodal representations to make informed decisions.
iv) Conduct experiments to test and validate the approach on physical robotic systems and evaluate the performance gains achieved by leveraging multimodal data compared to unimodal approaches.
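To make goal ii) concrete, a joint multimodal representation has the shape sketched below. This is illustrative only: a real system would use learned deep encoders (e.g., in PyTorch or TensorFlow), and the random linear maps and all dimensions here are hypothetical placeholders.

```python
import numpy as np

# Toy sketch of a joint visual-tactile representation: each modality is
# projected into a shared low-dimensional space, then the codes are
# concatenated. The random matrices stand in for learned deep encoders.
rng = np.random.default_rng(0)

def encode(x, W):
    return np.tanh(W @ x)                    # bounded low-dimensional code

W_vis = 0.1 * rng.normal(size=(8, 64))       # "vision encoder" (64-d features in)
W_tac = 0.1 * rng.normal(size=(8, 16))       # "tactile encoder" (16 taxels in)

vision = rng.normal(size=64)                 # stand-in for image features
tactile = rng.normal(size=16)                # stand-in for taxel pressures

z = np.concatenate([encode(vision, W_vis), encode(tactile, W_tac)])
# z: 16-d joint representation an RL policy could take as its state input
```

The design question the project addresses is precisely what replaces these placeholder maps: which architectures produce a joint code that is informative for contact-rich manipulation.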

Required Skills:
i) Proficiency in machine learning frameworks such as PyTorch or TensorFlow, with experience in implementing deep neural networks.
ii) Practical understanding of robotic manipulation and the various sensing modalities involved.
iii) Basic knowledge of reinforcement learning algorithms and experience with applying them to control tasks.

Expected Outcomes:
i) A sophisticated learning framework that effectively fuses vision and tactile sensory data for improved robotic manipulation.
ii) A series of experiments demonstrating the efficacy of multimodal representation learning in real-world scenarios and an assessment of the impact of multimodal data on reinforcement learning performance, particularly in terms of efficiency and robustness.
iii) A final report detailing the methodologies, experiments, results, and an in-depth discussion of the findings, including potential avenues for future research.
Project title: Immersive Teleoperation for Robot-Assisted Microsurgery using Digital Twin and Mixed Reality

Project Description:
This MSc project will explore the integration of a Digital Twin (DT) framework with Mixed Reality (MR) technology to create a novel and immersive teleoperation environment for robot-assisted microsurgery (RAMS). The convergence of DT and MR aims to provide an intuitive and interactive platform that enhances the surgeon's perception of and interaction with the surgical field, thus increasing the precision, safety, and efficiency of microsurgical procedures.
The project will involve the development of a DT that accurately simulates the behavior of both the surgical environment and the microrobotic instruments. The MR component will allow surgeons to visualize and manipulate a digital representation of the microrobot in three dimensions, overlaid with real-time 2D microscopic images. This hybrid visualization is expected to enhance the surgeon’s cognitive and spatial awareness, allowing for more precise and safer manipulation of tissues.

Project Objectives:
i) To create a detailed DT model that simulates the surgical environment, including tissue properties and robotic instrument dynamics; enable real-time updating of the DT based on sensory feedback and surgical interactions.
ii) To develop an MR interface that presents a 3D visualization of the digital microrobot and 2D microscopic images in real-time, enhancing the operator's immersion and depth perception.
iii) To ensure seamless integration between the DT and MR components for a coherent and responsive teleoperation experience; design intuitive user controls and establish protocols for user interaction that prioritize ergonomic comfort and reduce the likelihood of operator fatigue.
iv) To validate the MR teleoperation framework through extensive testing, including task performance metrics such as precision, speed, and error rates; conduct usability studies with potential end-users to gather feedback and further refine the system.

Required Skills:
i) Basic understanding of virtual reality and mixed reality technologies.
ii) Basic skills in simulation and modeling, with the ability to create accurate digital twins of physical systems.
iii) Proficiency in programming languages suitable for robotics and simulation (e.g., Python, C#).

Expected Outcomes:
i) Create a simulation environment that can be used for microsurgical training and pre-surgical planning.
ii) Develop a working prototype of a DT-driven MR teleoperation framework for RAMS and conduct a series of tests to evaluate the effectiveness of the system in improving surgical precision and safety.
iii) Write a report to document the entire design and development process, including any user studies, feedback, and iterations of the system.
Project title: Teleoperated Microsurgery with Human-Robot Cooperative Control and Haptic Guidance

Project Description:
Robot-assisted microsurgery is a field where the precision and stability of robotic systems can significantly enhance human capabilities. Teleoperation can provide surgeons with advanced tools to perform highly delicate tasks. However, to further improve the efficiency and safety of these surgeries, there is a need for an optimized human-robot cooperative control model that ensures intuitive operation, precision, and appropriate levels of autonomy when needed.
This MSc project will focus on the development of a leader-follower control framework that assists the surgeon in a cooperative manner. The proposed system will allow for fine manual control during critical phases of the surgery while offering the option for automated, accelerated movements when transitioning to less delicate tasks. Additionally, the system will integrate haptic feedback, providing tactile cues that guide the operator through the procedure, which is particularly beneficial for training purposes and enhancing the skills of novice surgeons.

Project Objectives:
i) To design an optimized leader-follower control model that allows for seamless switching between manual and automated control.
ii) To incorporate haptic guidance into the control model, giving the operator force feedback corresponding to the surgical environment.
iii) To employ learning from expert demonstration algorithms to capture and replicate expert surgeon's techniques and use the learned models to provide predictive haptic guidance, aiding in training and operation consistency.
iv) To simulate the teleoperated microsurgical environment and validate the control model and haptic feedback system; conduct experiments within the simulation to refine the control parameters and the machine learning models.

Required Skills:
i) Basic knowledge of machine learning and artificial intelligence principles.
ii) Proficiency in programming languages such as Python and C++, with experience in software development for robotic systems.
iii) Familiarity with simulation tools and environments relevant to robotics and teleoperation.

Expected Outcomes:
i) Develop a functional human-robot cooperative control framework tailored for microsurgery applications; integrate a haptic guidance system that is informed by machine learning models based on expert demonstrations.
ii) Demonstrate the effectiveness of the system through simulation, showcasing improved operation efficiency and training potential.
iii) Compile a comprehensive report detailing the design process, system architecture, simulation results, and potential for real-world application.
Project title: Robotic Tactile Palpation for Tumor Detection

Project Description:
Tumor detection through palpation is a critical skill in various surgical disciplines, allowing surgeons to identify abnormal masses within tissue. In robotic surgery, where direct tactile feedback is not available, the ability to detect tumors by feel is significantly compromised. This MSc project seeks to address this limitation by developing an advanced tactile palpation system that can be integrated into surgical robots, providing surgeons with the necessary haptic feedback to detect tumors effectively.
The project will involve the design and fabrication of tactile sensors that can mimic the sense of touch a surgeon uses to differentiate between normal and abnormal tissues. The student will work on creating a system that processes the tactile data and converts it into palpable feedback, enhancing the surgeon's ability to detect tumors during minimally invasive robotic procedures.

Project Objectives:
i) To investigate and develop advanced tactile sensors that can discern differences in tissue stiffness, texture, and other mechanical properties indicative of tumor presence; integrate these sensors into a robotic end-effector suitable for minimally invasive surgery.
ii) To create algorithms that can process tactile data in real-time and identify patterns consistent with tumor tissue; utilize machine learning techniques to enhance the sensor system's accuracy and sensitivity in tumor detection.
iii) To develop a haptic feedback system that can accurately convey the sensation of tumor tissues to the surgeon's fingertips; fine-tune the feedback system to represent different textures and consistencies as closely as possible to natural palpation.
iv) To develop an intuitive user interface that allows surgeons to interpret tactile data efficiently and validate the system's performance through benchtop experiments with tissue phantoms.

Required Skills:
i) Basic skills in sensor development, particularly with applications in detecting mechanical properties of soft tissues.
ii) Proficiency in programming for data processing and machine learning, particularly in Python or similar high-level languages.
iii) Basic understanding of robotic systems and integration, including hardware and software aspects.

Expected Outcomes:
i) A prototype tactile palpation system with tumor detection capabilities.
ii) A software suite for processing and pattern recognition of tactile data.
iii) A report covering system integration and data analysis, together with a set of experimental results demonstrating the system's detection capabilities.


Project title: Design of a Robotic Hand with Tactile Feedback for In-Hand Manipulation

Project Description:
In-hand manipulation involves adjusting the position and orientation of an object within the palm and fingers. The aim of this MSc project is to design and prototype an affordable robotic hand that not only mimics the complex motions required for in-hand manipulation but also integrates tactile feedback to enhance manipulation capabilities. This project requires the student to push the boundaries of current robotic hand designs by incorporating tactile sensors and developing algorithms to interpret and respond to tactile data, thus enabling the hand to perform delicate and skillful in-hand manipulations.

Project Goals:
i) To design a dexterous robotic hand with articulations that reflect the degrees of freedom necessary for in-hand manipulation; employ cost-effective manufacturing processes and materials, potentially including 3D printing, to construct the hand.
ii) To integrate affordable tactile sensors that provide critical feedback for in-hand manipulation, such as slip detection and pressure distribution; map sensor placements effectively across the hand, focusing on areas that maximize manipulation control.
iii) To leverage open-source electronics and software platforms to manage costs and encourage community collaboration and innovation; enable the construction of a compact actuation and control system that can process sensory input and coordinate the actuation of the hand with precision and responsiveness.
iv) To construct a working prototype that embodies the designed features and can be tested for functional capabilities; conduct a comprehensive series of tests that assess the in-hand manipulation skills of the robotic hand, including object reorientation, precision lifting, and controlled release.

Required Skills:
i) Proficient knowledge of mechanical engineering principles and experience with 3D CAD modeling.
ii) Experience with sensor technology and skills in electronic circuit design and familiarity with microcontroller programming and interfacing.
iii) Practical experience in prototype building, including familiarity with rapid prototyping techniques.

Expected Outcomes:
i) A functioning prototype of an affordable robotic hand capable of in-hand manipulation with tactile feedback.
ii) A set of test results demonstrating the hand's manipulation capabilities and the effectiveness of the tactile feedback system.
iii) A final project report detailing the design process, control algorithms, technical challenges, performance evaluation, and recommendations for future development.
Project title: Data-Efficient Machine Learning for Tactile Sensor Array-Based Robotic Grasping Perception

Supervisor: Dandan (Dian) Zhang
Co-Supervisors: Etienne Burdet, Alexis Devillard

Project Description
Robotic grasping and manipulation are critical capabilities for automation in various applications. The integration of tactile sensors with machine learning enhances the perception abilities of robotic hands, allowing for precise and adaptive interaction with objects. This project focuses on developing data-efficient machine learning models that leverage tactile sensor array data to improve robotic grasping perception, encompassing object recognition and force distribution estimation with minimal data requirements. By achieving high performance in these tasks with little data, the research will contribute to the field of robotic manipulation, enabling more efficient and adaptable robotic systems.

Methodology
The project involves a thorough review of current technologies in tactile sensing, machine learning for tactile data, and data-efficient learning methods. A dataset will be gathered from tactile sensors on various objects and pre-processed for quality enhancement. Machine learning models tailored for data efficiency, such as graph neural networks (GNNs), will be developed, utilizing techniques like transfer learning, few-shot learning, and data augmentation to minimize data requirements. Tactile sensor data will be integrated into the models, addressing challenges such as managing high-dimensional data and maintaining real-time processing. The models will be trained using data-efficient algorithms and optimized through hyperparameter tuning, regularization, and pruning. Performance will be evaluated using metrics such as accuracy, precision, recall, F1 score, MSE and RMSE. Finally, the models will be deployed in real-world grasping and manipulation scenarios to test their effectiveness and adaptability.
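As a concrete flavour of one technique named above, data augmentation can multiply a small tactile dataset by exploiting the symmetries of the sensor grid. The sketch below is illustrative only; the 4x4 taxel frame is a hypothetical stand-in for a real sensor array.

```python
import numpy as np

# One data-efficiency trick: augment each tactile-array reading with
# the symmetries of the sensor grid, turning one labelled sample into
# eight (four rotations, each with its mirror image).
def augment(frame):
    out = []
    for k in range(4):                 # the four 90-degree rotations
        r = np.rot90(frame, k)
        out.append(r)
        out.append(np.fliplr(r))       # plus the mirror image of each
    return out

frame = np.arange(16, dtype=float).reshape(4, 4)
augmented = augment(frame)             # 8 transformed copies of one reading
```

Whether such symmetries actually hold (e.g., whether the grasp task is orientation-invariant) is a modelling assumption the student would need to justify per task.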

Expected Outcomes
• Development of data-efficient machine learning models for accurate object recognition and force estimation using tactile sensor data.
• Enhanced perception capabilities of robotic hands, leading to improved performance in grasping and manipulation tasks.
• Comprehensive performance data demonstrating the models' effectiveness in various applications.
• Potential for broad applications in fields requiring precise robotic manipulation.

Required Skills:
• Software: Tools for data pre-processing and analysis (e.g., Python libraries)
• Machine Learning Frameworks: TensorFlow, PyTorch
• Data Collection Tools: Robotic arm operation
Project title: Enhancing Bimanual Robot-Assisted Microsurgery with AI-Driven Cooperative Control and Haptic Feedback

Project Description:
This project focuses on developing an advanced teleoperated bimanual robot-assisted microsurgical system that integrates machine learning to facilitate efficient human-robot cooperative control. The goal is to simulate a natural surgical environment where the precision of robotic arms is enhanced by machine learning algorithms. The project will explore the dynamic integration of the surgeon's manual inputs and the robot's autonomous actions to optimize surgical outcomes. By implementing demonstration learning techniques, the project aims to encapsulate and replicate expert surgical maneuvers. Furthermore, the integration of a comprehensive haptic feedback mechanism aims to restore tactile sensations lost in conventional teleoperated systems, providing surgeons with a more immersive and precise control over surgical instruments.
Project Plan:
• Milestone 1: Review existing technologies in teleoperated microsurgical systems and identify limitations in current cooperative control and haptic feedback implementations.
• Milestone 2: Design and simulate the leader-follower control algorithm, incorporating initial machine learning models based on demonstration learning.
• Milestone 3: Develop and integrate the haptic feedback system, ensuring real-time transmission of tactile data to the surgeon.
• Milestone 4: Enhance machine learning models with real surgical data, focusing on improving prediction accuracy and system responsiveness.
• Milestone 5: Conduct extensive testing in simulated environments, followed by validation with real-world data.
• Student-Led Explorations: Students can explore different machine learning algorithms for optimizing control strategies, develop novel haptic feedback mechanisms, or investigate the impact of varying surgical scenarios on system performance.
Prerequisites (Student Profile):
• Essential Skills: Proficiency in programming, strong foundations in machine learning and control systems, experience with robotics simulation tools (e.g., ROS, Gazebo).
• Courses: Robotics, Machine Learning, Control Systems, Human-Computer Interaction, Statistics.
Project title: Human-in-the-Loop Control for a Tactile Sensor-Based Contact-Rich Manipulation Task

Description: Contact-rich manipulation tasks, such as assembling intricate components or handling delicate objects, pose significant challenges for autonomous robotic systems due to the complexity and variability of tactile interactions. This project aims to develop a human-in-the-loop control system for a robotic arm equipped with tactile sensors, enabling precise and adaptive manipulation through real-time human guidance and feedback.
By leveraging human intuition and tactile feedback, the system will improve the precision and effectiveness of robotic arms in complex manipulation scenarios, with broad applications in various industries.


Expected Outcomes
• A fully functional teleoperation system with integrated tactile sensors for robotic manipulation.
• Enhanced precision and adaptability in contact-rich manipulation tasks through effective human-robot collaboration.
• Comprehensive performance data demonstrating the system’s effectiveness in various applications.
Required Skills:
• Hardware: Robotic arm, haptic feedback devices
• Software: Control algorithms, user interface development tools
• Sensors: Vision-Based Tactile Sensor
Dr David Labonte

Profile: https://profiles.imperial.ac.uk/d.labonte 

Contact details: d.labonte@imperial.ac.uk

Project title: Principal component analysis to study the effects of leg loss on insect locomotion

Description: We will develop a novel method to study the 3D kinematics of walking animals, based on principal component analysis.
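A minimal sketch of the proposed analysis: run PCA on a matrix of joint-angle time series and ask how much of the variance the leading components capture. The data below are synthetic (two shared oscillations mixed into six hypothetical joint angles); real input would be tracked 3D leg kinematics.

```python
import numpy as np

# PCA on a matrix of joint-angle time series
# (rows = time samples, columns = joint angles).
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
base = np.column_stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
angles = base @ rng.normal(size=(2, 6)) + 0.05 * rng.normal(size=(500, 6))

X = angles - angles.mean(axis=0)             # centre each joint angle
_, s, _ = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)              # variance fraction per component
# for this two-mode gait the first two components dominate
```

A change in the number or loading of dominant components after leg loss would be exactly the kind of kinematic signature the project looks for.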
Project title: How far can you throw a ball (and thus, how fast can animals run)?

Description: Understanding the physical limits to muscle performance is of obvious interest. A classic perspective tells us that each unit of muscle can do a fixed amount of work, and this fixed "work density" determines the speed of animals of different sizes. But perhaps there are other limits to performance that are often overlooked?

One such limit stems from the maximum shortening speed of muscle. If this limit is reached before the muscle has delivered its maximum work capacity, it is the shortening speed, not the work capacity, that constrains performance. What determines which limit is reached first?

A simple toy experiment to explore this idea is to test how fast humans can throw objects of different mass: if work matters, the kinetic energy of the throw should be fixed, so speed should fall as mass increases; if shortening speed matters, speed should be independent of mass. Clearly, for a large enough mass range, speed eventually decreases - but it also seems intuitive that one can throw a ping-pong ball about as fast as a tennis ball. A thorough experimental approach is therefore needed, complemented by sound theoretical mechanical analysis - both are the task of this project. How far can humans throw balls of different mass?
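One way to make the two regimes explicit (a sketch under one added assumption: the limb itself has an effective mass $m_a$ that must be accelerated along with the ball of mass $m$) is

```latex
\tfrac{1}{2}\,(m + m_a)\,v^{2} = W
\quad\Rightarrow\quad
v_{\mathrm{work}} = \sqrt{\frac{2W}{m + m_a}},
\qquad
v = \min\!\left(v_{\max},\, v_{\mathrm{work}}\right),
```

so that for $m \ll m_a$ the speed saturates (light balls all leave the hand at roughly the same speed, possibly at the cap $v_{\max}$ set by muscle shortening), while for $m \gg m_a$ speed falls off as $m^{-1/2}$. Which regime the data follow, and where the crossover sits, is what the experiment should reveal.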
Design and building of an experimental environmental chamber Many experiments need to be conducted in controlled environmental conditions (temperature, humidity, etc). However, control chambers are often prohibitively expensive. The aim of this project is to design and build a simple and cheap system using off-the-shelf parts and microcontrollers, and to document this design such that it can be shared with others - science is best when it is open. The chamber should allow variation between 5 and 45 °C, and between 10 and 90% relative humidity.
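A minimal sketch of the control loop such a chamber needs: bang-bang (on/off) temperature control with hysteresis, run here against a toy first-order thermal model standing in for the real chamber. All numbers (gains, setpoint, time step) are illustrative assumptions, not measured chamber parameters.

```python
def simulate_chamber(setpoint_c=35.0, hysteresis_c=0.5, ambient_c=20.0,
                     heater_gain=2.0, loss_coeff=0.05, dt_s=1.0, steps=2000):
    """Bang-bang temperature control against a toy thermal plant."""
    temp = ambient_c
    heater_on = False
    for _ in range(steps):
        # Hysteresis band prevents rapid relay chatter around the setpoint
        if temp < setpoint_c - hysteresis_c:
            heater_on = True
        elif temp > setpoint_c + hysteresis_c:
            heater_on = False
        # Toy plant: heater input plus Newtonian cooling towards ambient
        heating = heater_gain if heater_on else 0.0
        temp += dt_s * (heating - loss_coeff * (temp - ambient_c))
    return temp
```

On a real microcontroller the same logic would read a temperature sensor and switch a relay; a second, identical loop would drive a humidifier/dehumidifier for relative humidity.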
Mechanical properties of insect apodemes We will measure the mechanical properties of the insect-equivalent of tendons: apodemes.
Dr Hayriye Cagnan

Profile: https://profiles.imperial.ac.uk/h.cagnan 

Contact details: h.cagnan@imperial.ac.uk

Project title Description
Dual-site transcranial alternating current stimulation for tremor control Involuntary shaking is a common symptom of Parkinson's Disease and Essential Tremor, affecting around one million people in the UK. This project aims to leverage plasticity (the brain's ability to adapt and change) for therapeutic purposes by delivering well-timed electrical inputs to key regions across the tremor network. Based in the Cagnan lab, the focus will be on piloting dual-site stimulation of the motor cortex and cerebellum to achieve longer-lasting therapeutic benefits for tremor patients. Your role will include (1) modelling the volume of tissue activated during dual-site stimulation, (2) developing and testing closed-loop control algorithms and (3) developing approaches for efficient optimisation of stimulation parameters.

We are looking for a student with strong skills in engineering, instrumentation, and programming, along with a background in neuroscience.

1. Schwab BC, König P, Engel AK. Spike-timing-dependent plasticity can account for connectivity aftereffects of dual-site transcranial alternating current stimulation. NeuroImage. 2021;237:118179. doi:10.1016/j.neuroimage.2021.118179
2. Schwab BC, Misselhorn J, Engel AK. Modulation of large-scale cortical coupling by transcranial alternating current stimulation. Brain Stimulation. 2019;12(5):1187-1196. doi:10.1016/j.brs.2019.04.013
3. Saturnino GB, Madsen KH, Siebner HR, Thielscher A. How to target inter-regional phase synchronization with dual-site Transcranial Alternating Current Stimulation. NeuroImage. 2017;163:68-80. doi:10.1016/j.neuroimage.2017.09.024
4. Fleming JE, Sanchis IP, Lemmens O, et al. From dawn till dusk: Time-adaptive bayesian optimization for neurostimulation. PLOS Computational Biology. 2023;19(12):e1011674. doi:10.1371/journal.pcbi.1011674
5. Cagnan H, Pedrosa D, Little S, et al. Stimulating at the right time: phase-specific deep brain stimulation. Brain. 2017;140(1):132-145. doi:10.1093/brain/aww286
Modulatory role of transcranial stimulation on cognitive control  Everyday decision-making depends on our ability to adapt and sometimes stop actions unexpectedly. This skill can range from something as simple as resisting a tempting slice of cake to something as critical as hitting the brakes in an emergency. Cognitive control can be compromised in a range of neuropsychiatric disorders and remains difficult to restore using invasive and non-invasive brain stimulation techniques. We previously targeted the medial prefrontal cortex, a key brain region involved in response inhibition, using transcranial electrical stimulation to modulate neural rhythms and associated behaviors.

This project, based in the Cagnan lab, will focus on (1) data analysis of electrophysiological and behavioral responses, (2) stimulation artifact removal, and (3) modeling behavioral and electrophysiological data.

We are looking for a student with strong signal processing skills and a background in neuroscience.

Mandali A, Torrecillos F, ... Cagnan H, et al. Tuning the brakes - Modulatory role of transcranial random noise stimulation on inhibition. Brain Stimulation: Basic, Translational, and Clinical Research in Neuromodulation. 17(2):392-394.
Phase Transitions in Circadian Tremor Patterns Involuntary shaking is a common symptom of Parkinson’s Disease (PD), affecting approximately 150,000 people in the UK. Tremor in PD can be triggered or influenced by various factors throughout the day (e.g., stress or medication intake), making it crucial to identify trends that are key to effective clinical management.

The project will take place in the Cagnan Lab and will involve analysing a unique dataset consisting of long-term (2 years) recordings of PD patients collected via wearable sensors in free-living conditions. Tremor events in this dataset have already been identified through machine learning algorithms. With this project, we will explore the circadian dynamics of tremor in PD over several days using recurrence quantification analysis (RQA), a nonlinear data-driven technique that provides objective markers for regularity, trends, and phase transitions in time series data. A particular focus will be on the impact of patients’ medication schedule changes on these dynamics.
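As a minimal illustration of the RQA machinery the project builds on, a recurrence matrix and the recurrence rate (the simplest RQA marker) can be computed in a few lines. The threshold and the plain 1-D state representation are illustrative simplifications; real tremor analysis would use delay-embedded wearable-sensor time series.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """R[i, j] = 1 where states i and j are closer than eps (1-D states)."""
    x = np.asarray(x, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])  # pairwise state distances
    return (dist < eps).astype(int)

def recurrence_rate(R):
    """Fraction of recurrent point pairs, excluding the trivial diagonal."""
    n = R.shape[0]
    off_diag = R.sum() - n          # diagonal entries are always recurrent
    return off_diag / (n * n - n)
```

A periodic tremor signal produces diagonal line structures in R; transitions in that structure over days (e.g. after a medication schedule change) are what markers such as determinism and laminarity quantify.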

We are seeking a motivated student with programming and signal processing skills who is eager to deepen their understanding of the circadian progression in neurodegenerative disorders.
Sleep Fragmentation in Parkinson’s Disease and its impact on tremor  Sleep fragmentation, characterised by frequent awakenings or disruptions, has a significant impact on daytime functioning, leading to increased fatigue, reduced motor control, cognitive decline, and heightened stress and anxiety. In Parkinson’s disease (PD), sleep fragmentation can diminish a person’s ability to manage and compensate for daily tremors, worsening their symptoms' severity and duration. While recent evidence supports this connection, a systematic and comprehensive study with a representative PD cohort and long-term follow-up is still lacking.

This project aims to investigate sleep fragmentation from multiple angles and assess how it affects the severity and duration of daily tremors in PD patients. The research will be conducted in the Cagnan Lab, utilising a unique dataset containing two years of long-term recordings from PD patients in free-living conditions, with tremor events already identified by machine learning algorithms.

We are looking for a motivated student with strong programming and signal processing skills who is eager to better understand the relationship between sleep fragmentation and Parkinson’s disease symptoms.
Dr James Choi

Profile: https://profiles.imperial.ac.uk/j.choi 

Contact details: j.choi@imperial.ac.uk

Project title Description
Visualising Sound using Machine Learning and/or Signal Processing Algorithms Purpose. The purpose of this project is to develop deep neural network or beamforming algorithms that can reconstruct the locations of acoustic sources using multiple microphones.

Motivation. In therapeutic ultrasound, a focused ultrasound transducer is used to concentrate energy at a point in the body, allowing us to noninvasively and locally manipulate tissue (tumour ablation, drug release from acoustically-active particles, etc). Our laboratory develops therapeutic ultrasound devices for delivering drugs to the brain (across the blood-brain barrier) for the treatment of brain cancers, neurodegenerative diseases, and other neurological conditions. However, the success or failure of the technique has been difficult to track, as clinicians are unable to directly observe what is happening within the body. An emerging way of monitoring this procedure is with the use of microphones located around the focused ultrasound transducer. Sounds generated during the procedure are captured by the microphones. We then reconstruct an image of the treated area using passive beamforming algorithms. The reconstruction of a signal source based on multiple sensor signals is broadly known as beamforming. In addition to medical imaging, it is used in underwater acoustics, astronomy, and other disciplines. The problem with many existing passive beamforming algorithms is the poor spatial resolution in the reconstruction of the sound sources. This means we cannot precisely locate where the sound is coming from.

The purpose of this project is to develop a deep neural network and/or signal processing methods that can reconstruct an image of the treated region with better accuracy and spatial resolution.

Work description. This work will involve generating training data using computer simulations on a Matlab toolbox known as k-wave. We will then train the deep neural network on PyTorch or develop fundamental signal processing algorithms. We will explore conventional neural networks such as convolutional neural networks, recurrent neural networks, and others; and, potentially, more advanced techniques, such as transformers and physics-inspired neural networks.
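A toy sketch of the passive delay-and-sum baseline that such learned or advanced methods are compared against: synthesise a pulse from a point source on a small sensor array, then grid-search candidate positions by coherently summing delayed sensor signals. The geometry, sampling rate and pulse shape below are illustrative assumptions, not lab parameters.

```python
import numpy as np

C = 1500.0      # assumed speed of sound, m/s
FS = 10e6       # assumed sampling rate, Hz

def simulate(sensors, source, n=400):
    """Record a Gaussian pulse from `source` on each sensor (losses ignored)."""
    t = np.arange(n) / FS
    sigs = np.zeros((len(sensors), n))
    for i, s in enumerate(sensors):
        delay = np.linalg.norm(source - s) / C
        sigs[i] = np.exp(-((t - 10e-6 - delay) ** 2) / (2 * (0.3e-6) ** 2))
    return sigs

def das_energy(sigs, sensors, point):
    """Energy of the delay-and-sum trace focused at a candidate point."""
    total = np.zeros(sigs.shape[1])
    for i, s in enumerate(sensors):
        shift = int(round(np.linalg.norm(point - s) / C * FS))
        total += np.roll(sigs[i], -shift)   # advance by the propagation delay
    return float(np.sum(total ** 2))
```

Evaluating das_energy over a grid of candidate points gives a source map whose peak is the estimate; the broad mainlobe of that peak is exactly the resolution limit the project aims to beat.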
Optical hand tracking using machine learning Purpose. Implement optical hand tracking using machine learning, analyse speed and bottlenecks, and explore ways of improving existing methods.

Motivation. Optical hand tracking is a method used in virtual reality, augmented reality, and human-machine interfacing, as it allows the user to interact with virtual environments and communicate with robots and machines in a natural way. However, optical hand tracking has not achieved widespread adoption due to limitations in speed and precision, which frequently break the immersive experience.

The purpose of this project is to analyse existing optical hand tracking methods and quantify their speed, precision, and failure rates; and explore ways of improving the optical hand tracking performance.

Work. The student will set up their own optical hand tracking rig using a camera (e.g., a webcam) and write their own optical hand tracking method from scratch using Python and PyTorch. The student will then improve the algorithm using state-of-the-art published algorithms and quantify the speed, precision, and failure rates of all of these methods. The student will evaluate the bottlenecks in each of these categories: for example, what are the physical, hardware, or computational reasons for these limitations? Certainly the speed of light is fast and so is not constraining the speed of calculations. Perhaps it is the two-step process of first identifying where the hand is in the image and then identifying where the hand joints are located? Is the constraint due to the hardware's calculation speeds? We will explore these questions and many others. The student will learn how to approach a common machine learning problem with the deep analytical abilities of an academic researcher.
Ultra-High-Speed Video Camera at 50 Million Frames per Second Purpose: Develop an ultra-high-speed video camera operating at 50 million frames per second.

Motivation: Certain phenomena, such as ultrasound imaging and therapy with microbubbles (contrast agents), operate at MHz rates. Such fast dynamics cannot be captured using traditional cameras, which operate at around 60 Hz. Commercially available cameras can reach 1 million frames per second, which is still not enough. And while a 10 million frames per second camera is available on the market for £200k, that camera cannot capture more than 256 frames (25.6 µs of data). We propose a new video camera concept that could reach up to 50 million frames per second, capable of capturing a nearly unlimited number of frames. By creating this device, we would be able to observe phenomena in biological tissue that no one has been able to observe.

Outside of the domain of biomedical engineering, this camera could be used to image plasma in fusion reactors, high-speed objects in space, and other high-speed phenomena that require incredibly high frame rates.

Work: This project requires electrical engineering skills. The student would be asked to build circuits that connect to a unique sensor array. If the analog circuit is successful, the project would then require some digital electronics and optics (physics) skills.
Microfluidic Devices for Engineering Advanced Microparticles for Noninvasive Surgery Purpose: To develop microfluidic devices to engineer advanced microparticles that can be controlled noninvasively with focused ultrasound devices.

Motivation. The vision for noninvasive surgery is to manipulate and probe tissue deep in the body without having to cut the body open. Dr. Choi's laboratory develops noninvasive ultrasound devices that emit and receive sound from the patient's surface. We are working with Dr. Au's laboratory to create particles that our devices could manipulate. Here, we ask the student to develop a microfluidic platform to create advanced microparticles that our noninvasive devices could manipulate. In particular, we would like to design microbubbles to address one of the greatest medical challenges of our time: treating brain disorders. Brain disorders such as Alzheimer's disease remain untreatable, not because great drugs aren't available, but because those drugs cannot cross the brain's blood vessels, which are lined by the blood-brain barrier. Using engineered microbubbles remotely controlled by ultrasound, we can open the blood-brain barrier, finally allowing drugs to enter the brain.

The work. Build microfluidic devices, which includes working in a cleanroom. You may also work with ultrasound devices, so strength in engineering and physics would be helpful.
Professor Jimmy Moore

Profile: https://profiles.imperial.ac.uk/james.moore.jr 

Contact details: james.moore.jr@imperial.ac.uk

Project title Description
Lymph node physiological mass transport Many important immune processes occur in lymph nodes, but we actually know little about their structure. This is important because structure determines the patterns of mass transport that ensure antigens are presented to the appropriate cells. Based on a combination of high-resolution imaging protocols that quantify the structure of human lymph nodes, we will construct models of mass transport of chemokines and antigens in collagen bundle conduits, parenchyma and blood vessels.
MRI phantom for testing lymphatic vessel imaging sequences There are no means for measuring pressure, flow or diameter reliably in any lymphatic vessel without surgical exposure.  While the vessel diameters are on the order of MRI spatial resolution, their unique flow dynamics profile offers opportunities to develop diffusion-based imaging sequences to distinguish them from blood vessels and interstitial fluid movement.  We aim to develop a phantom that reproduces these flow patterns to aid in the development of better imaging sequences.
Stem Cell Injection Device for Minimising Shear-Induced Cell Lysis While there is great potential in using cells as part of therapeutic strategies for many diseases, these strategies are limited by the survivability of cells during the injection process.  We have designed a combination hydrogel and syringe injection system that aims to maximise cell viability.  The project will involve making different hydrogel formulations and testing their mechanical properties.  This information will be used to determine the details of the syringe design, which will be tested computationally and experimentally.
Dr Juan Gallego

Profile: https://profiles.imperial.ac.uk/juan-alvaro.gallego 

Contact details: juan-alvaro.gallego@imperial.ac.uk 

Project title Description
Advancing game control by decoding the user's intent with surface electromyography Surface electromyography (sEMG) is a non-invasive neural interfacing technique which can predict the motor intent arising from the brain's higher-level processing before the body's physical output. This is achieved by simply detecting electrophysiological activity from muscles at the skin surface. These recordings are then effectively reverse engineered to identify single motor unit discharge patterns, which encode movement information [1][2][3]. One could use this information to control external devices via a non-invasive neuro-muscular interface, to achieve brain-computer interaction or human augmentation. Recently, sEMG for neural interfacing has gained attention from industry, beyond its conventional clinical and research use, whereby neural activity is mapped into real-time myo-control paradigms for augmented experiences with gaming and consumer electronics [4][5].
To explore the complete interfacing process from recording to signal processing to encoding, this multi-faceted project will entail designing and implementing a neuromuscular interface for motion/force control of a simple game such as Flappy Bird, ping-pong, Snake, etc. There are two branches to the project implementation:
1. Interface design combined with sEMG signal recording and processing, targeting self-selected forearm muscles.
2. Algorithm design for the mapping of neural activity into a myo-control paradigm, and a system to transfer information to the chosen/designed gaming platform.
We plan to make use of your resultant set-up for future control experiments in the lab, as well as give you the opportunity to partake in future outreach activities, teaching the public about the exciting and diverse applications of sEMG.
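A hedged sketch of the simplest myo-control mapping the project could start from: rectify the sEMG signal, smooth it with a moving-average window into an amplitude envelope, and threshold the envelope into a binary game command (e.g. "flap"). The window length and threshold are illustrative assumptions; real motor-unit decomposition [1][2] goes well beyond this.

```python
import numpy as np

def emg_envelope(emg, fs, window_s=0.1):
    """Full-wave rectification followed by a moving-average smoother."""
    rectified = np.abs(emg)
    n = max(1, int(window_s * fs))
    kernel = np.ones(n) / n
    return np.convolve(rectified, kernel, mode="same")

def to_command(envelope, threshold):
    """1 where the muscle is 'active', 0 elsewhere."""
    return (envelope > threshold).astype(int)
```

In a real-time set-up the same two steps run on short buffers of the live recording, and the command stream is forwarded to the game loop.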

References
[1] D. Farina and A. Holobar, "Characterization of Human Motor Units from Surface EMG Decomposition," Proceedings of the IEEE, vol. 104, pp. 353-373, 2016.
[2] A. Holobar and D. Farina, "Non-invasive neural interfacing with wearable muscle sensors: Combining convolutive blind source separation methods and deep learning techniques for neural decoding," IEEE Signal Processing Magazine, vol. 38, no. 4, pp. 103-118, July 2021.
[3] D. Farina et al., "Decoding the neural drive to muscles from the surface electromyogram," Clin Neurophysiol, vol. 121, pp. 1616-1623, 2010.
[4] E. F. Melcer et al., "CTRL-labs: Hand Activity Estimation and Real-time Control from Neuromuscular Signals," Conf. Hum. Factors Comput. Syst. Proc., pp. 1-4, 2018.
[5] T. Sharp et al., "Accurate, robust and flexible real time hand tracking," Conf. Hum. Factors Comput. Syst. Proc., vol. 2015-April, pp. 3633-3642, 2015.


Professor Mengxing Tang

Profile: https://profiles.imperial.ac.uk/mengxing.tang 

Contact details: mengxing.tang@imperial.ac.uk 

Project title Description
Functional brain imaging using ultrasound Accelerating 3D ultrasound brain imaging with machine learning

Background
There currently exists no brain imaging modality that provides high-resolution images while also being portable, cheap and generally safe. Existing modalities such as MRI and CT require expensive, non-portable equipment; MRI cannot be applied in the presence of ferromagnetic objects, and CT uses ionising radiation.
Recently, ultrasound-based imaging of the brain has been proposed as a promising alternative, particularly for imaging functional brain activity through changes in local blood flow. Ultrasound has a much higher sensitivity to subtle blood flow changes than other modalities.
Objectives
This project intends to explore ways in which 2D/3D functional imaging of the brain can be achieved using non-invasive ultrasound, first in small animal models, with a view to extending it to humans in the future.
Tracking and correcting tissue motion using machine learning for Super-Resolution Ultrasound microvascular imaging Tissue motion has been one of the biggest challenges in medical imaging, and correcting it plays a key role in generating super-resolution images of the microvasculature. Microvasculature morphology is linked to the regulation of blood perfusion and tissue remodelling, e.g., wound healing, carcinogenesis, plaque formation or blood glucose removal. Measurement of these structures with high spatiotemporal resolution is consequently useful in understanding the underlying biomechanical processes. Super-resolution ultrasound localization microscopy can non-invasively visualise microvasculature beyond the diffraction limit to create super-resolved images of microvascular structures at the microscopic level. For this to work, motion must be corrected.

In this project you will learn existing motion correction methods and codes, and compare their performance on both simulation data and experimental data.

Your task:
• Learn the principle and codes for existing algorithms of image motion correction.
• Generate/identify data sets suitable for evaluation of tissue motion correction
• Evaluate and compare the performance of existing motion correction algorithms
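One classical starting point among the existing algorithms to be compared is phase correlation for rigid (translational) motion: a global shift between two frames appears as a linear phase ramp in the Fourier domain, and the inverse transform of the normalised cross-power spectrum peaks at that shift. A minimal integer-shift sketch, with illustrative frame sizes:

```python
import numpy as np

def estimate_shift(ref, moved):
    """Estimate the integer (dy, dx) translation taking `ref` to `moved`."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moved)
    cross_power = F2 * np.conj(F1)
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase only
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts (FFT wrap-around convention)
    shifts = [int(p) if p <= s // 2 else int(p) - s
              for p, s in zip(peak, corr.shape)]
    return tuple(shifts)
```

Sub-pixel refinement, non-rigid deformation models and learned registration would all be evaluated against this kind of baseline on the simulated and experimental data.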

As a student you should have knowledge of at least one programming language. Good organisational skills and competence in documentation are very important.

What you will learn:
• Ultrasound super-resolution imaging.
• Understand the concept of motion correction and existing methods/algorithms.
• Quantitative evaluation of algorithm performance

If you are interested in this project, please email Prof. Tang: Mengxing.tang@imperial.ac.uk
Sensing and imaging blood flow in the brain using ultrasound Non-invasive measurement of blood flow velocity is important in a wide range of clinical applications; for example, neurological patients with a peak cerebral flow velocity of over 2 m/s would require intervention. However, current transcranial Doppler ultrasound has limitations, as it requires significant operator experience. This has largely limited the broad application of the technique in clinical settings. In this project, we aim to develop a technique that can measure blood flow velocities within a large volume using ultrasound, so that the outcome is much less dependent on operator experience.

The project involves the understanding of biomedical ultrasound principles, computer simulation, signal processing, and some experimental evaluation in the later phase of the project. Anyone with an interest in experiments, data/signal processing and simulation/modelling can apply.     
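To make the signal-processing core concrete: one standard way Doppler systems estimate axial velocity is the lag-one autocorrelation (Kasai) estimator, which reads the pulse-to-pulse phase shift of the slow-time signal. The sketch below uses a single synthetic scatterer and illustrative parameters (the project does not prescribe this estimator or these values).

```python
import numpy as np

C = 1540.0        # assumed speed of sound in tissue, m/s
F0 = 2e6          # assumed transmit centre frequency, Hz
PRF = 5e3         # assumed pulse repetition frequency, Hz

def slow_time_signal(velocity, n_pulses=64):
    """Synthetic IQ samples from a scatterer moving axially at `velocity`."""
    fd = 2.0 * velocity * F0 / C            # Doppler frequency
    n = np.arange(n_pulses)
    return np.exp(1j * 2 * np.pi * fd * n / PRF)

def kasai_velocity(iq):
    """Estimate axial velocity from the mean lag-one phase shift."""
    r1 = np.mean(iq[1:] * np.conj(iq[:-1]))
    phase = np.angle(r1)                    # radians per pulse interval
    return phase * C * PRF / (4 * np.pi * F0)
```

Note the aliasing limit: with these toy values the Nyquist velocity is C * PRF / (4 * F0) ≈ 0.96 m/s, so the clinically relevant 2 m/s flows mentioned above would require a higher PRF or lower centre frequency.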
Dr Muhammad Usman

Profile: https://profiles.imperial.ac.uk/m.usman 

Contact details: m.usman@imperial.ac.uk

Project title Description
Machine learning based quiescent phase detection for breath-hold cardiac magnetic resonance imaging In many applications of cardiac imaging, including T1/T2 mapping, late gadolinium enhancement and coronary magnetic resonance angiography, the timing and duration of the data acquisition window play a major role in obtaining high quality images. Imprecise timing or duration of the acquisition window can result in cardiac images corrupted by blurring or ghosting artefacts, due to allowing too much intra-frame motion. A framework is needed in which the acquisition window can be optimally adjusted irrespective of patient type or heart rate. In this project, machine learning methods will be applied to breath-hold cardiac CINE imaging to generate low dimensional manifolds representing cardiac motion. The low dimensional manifolds will be processed automatically to detect the most quiescent period within the cardiac cycle.
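A hedged sketch of the pipeline outlined above: embed cine frames on a low-dimensional PCA manifold, score motion as the frame-to-frame distance in that space, and pick the stillest window. Synthetic 1-D "frames" (a translating Gaussian bump) stand in for real cardiac images, and all sizes are illustrative.

```python
import numpy as np

def pca_embed(frames, n_components=2):
    """Rows are frames; returns their coordinates on the top principal axes."""
    X = frames - frames.mean(axis=0)
    # SVD of the centred data matrix gives principal directions in Vt
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T

def quiescent_start(embedding, window):
    """Index of the window with the least total motion in manifold space."""
    step = np.linalg.norm(np.diff(embedding, axis=0), axis=1)
    scores = [step[i:i + window].sum() for i in range(len(step) - window + 1)]
    return int(np.argmin(scores))
```

On real CINE data the manifold might come from a nonlinear embedding rather than PCA, but the quiescence criterion (minimal trajectory length over a candidate acquisition window) carries over directly.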
Dr Pedro Ballester

Profile: https://profiles.imperial.ac.uk/p.ballester 

Contact details: p.ballester@imperial.ac.uk

Ballester group page - https://ballestergroup.github.io/

 

Project title Description
Searching for molecules with potent and selective PLK activity using machine learning Polo-like kinases (PLKs) are a family of serine/threonine protein kinases involved in multiple functions in eukaryotic cell division. There are 5 members of this family identified in humans (PLK1-PLK5). Given their proven and potential roles as drug targets, there is interest in identifying molecules that selectively inhibit specific members of this family. This project will consider the data available for each member, that is, the set of molecules that have already been tested on each of them, and investigate models able to predict their activity. These models will be constructed using machine-learning approaches, which constitute a form of artificial intelligence that can ameliorate virtual screening performance through a process of automatic and direct learning from the data. Following intensive validation and prospective application to make-on-demand compound libraries, our collaborators will test in vitro the molecules selected as likely to have potent and selective PLK activity.
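A common ligand-based baseline that such machine-learning models are compared against is similarity ranking: score each library molecule by its Tanimoto similarity to a known active. The sketch below represents fingerprints as plain Python sets of "on" bit indices; in practice one would use e.g. RDKit Morgan fingerprints (an assumption, not a project requirement).

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient of two binary fingerprints given as sets of on-bits."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

def rank_by_similarity(query_fp, library):
    """Sort (name, fingerprint) pairs by decreasing similarity to the query."""
    return sorted(library, key=lambda item: tanimoto(query_fp, item[1]),
                  reverse=True)
```

Selectivity could then be probed by ranking the same library against actives of each PLK member and comparing where a candidate lands in each ranking.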
Virtual screening for molecules with ATM kinase activity by allying artificial intelligence and target-specific data augmentation Discovering drug leads and optimising their potency for a target is an expensive, time-consuming and particularly challenging process. Predictive models are therefore needed to help researchers bridge this translation gap by reducing the experimental effort required, or even making it possible at all, to achieve optimised drug leads for a given target.

Docking is a computational technique providing relatively fast predictions of whether and how a molecule binds to an atomic-resolution structure of the target. Very recently, the screening of billions of make-on-demand molecules with classical docking tools has directly yielded a range of diverse and potent drug leads for several targets. Therefore, no lengthy and costly potency optimisation was subsequently required, thereby strongly reducing the time and cost of providing these advanced compounds (https://www.nature.com/articles/d41586-019-00145-6).

However, the modest predictive performance of these classical tools is extensively documented. This means that their performance on many other targets is likely to be much worse than on the few targets reported so far. It is now well-known that a way to boost docking performance on other targets is to enhance it with Artificial Intelligence (AI). Unlike classical tools, AI models can exploit fast-growing datasets to learn to discriminate between molecules with or without potent activity for the target. AI models are furthermore likely to make drug design even faster and less expensive than the classical tools on those targets where the latter work well.

Here we will investigate optimal ways to build supervised learning models tailored to the ATM Kinase. We will compare the predictive accuracy of these models to that of existing models for any target, whether classical or AI-based, using the most rigorous retrospective assessment practices. We will also investigate to which extent coupling these models with ultrafast, yet less accurate, models can directly provide optimised drug leads in a fraction of the time. Once this retrospective study is completed, we will employ the most promising model to screen a library with billions of make-on-demand compounds. This will result in a selection of novel molecules with high predictive activity for this enzyme target.
A toxicity prediction toolbox based on the Therapeutic Data Commons benchmarks Drug leads can induce many forms of toxicity in humans, which ultimately can result in abandoning the lead molecule or even the therapeutic target altogether. It is therefore important to have computational models able to predict each known toxicity endpoint for a given molecule. The Therapeutic Data Commons proposes a suite of toxicity benchmarks (https://tdcommons.ai/single_pred_tasks/tox/) along with leaderboards pointing out the most predictive artificial intelligence (AI) models to date (e.g. https://tdcommons.ai/benchmark/admet_group/20herg/).

For each of these problems, the student will review the literature to introduce the problem and its most predictive model in a clear manner. The student will also evaluate the model with other performance metrics and apply it to other sets of molecules provided by the supervisor (thus, independence in running code such as Python scripts is required).

This project is an opportunity to be introduced to an important drug discovery problem and the best AI models to tackle it, and to hone your programming skills.

References: https://www.nature.com/articles/s41589-022-01131-2   https://arxiv.org/abs/2102.09548 
Machine learning to predict the activities of molecules on cultures of pathogenic bacteria This methodology research project aims to investigate the development of machine learning models to predict how molecules inhibit pathogenic bacteria growth. The project will exploit recent antimicrobial and toxicity data in novel ways. The student will investigate which supervised learning algorithm leads to the most predictive models under realistic scenarios (distribution shifts). The best models will be employed to screen ultra­large compound libraries to identify those potential antibiotics unlikely to be toxic for humans. There will be opportunities to validate these predictions in vitro via existing collaborations.
Optimal design of virtual screening benchmarks from in vitro screening data Introduction:
Virtual screening (VS) has become an important source of small-molecule drug leads. A benchmark is needed to identify the VS method(s) that will perform best prospectively for a given therapeutic target. Benchmarks are also needed to find the optimal settings for the selected VS method/model. A VS benchmark is a library with two classes of molecules: those whose activity for the target is above a given threshold (actives, the positive class) and those with weaker or no activity at all (inactives, the negative class). Among them, a screened library is one whose molecules have all been screened in the same centre and using the same assay/s, e.g. the results of high-throughput screening (HTS) of a compound library against the considered target.
Unfortunately, HTS data have been used for VS benchmarking in a convenient yet unrealistic manner (e.g. generating benchmarks with much smaller chemical diversity than HTS). The question is how useful are these HTS-derived datasets as VS benchmarks with respect to the ground truth represented by the unadulterated HTS datasets.
About the supervisor:
Dr Pedro Ballester has over 17 years of experience in this research area. His recent papers in this area have shown the potential of Artificial Intelligence (AI) for structure-based drug design:
https://wires.onlinelibrary.wiley.com/doi/abs/10.1002/wcms.1478
https://academic.oup.com/bib/article-abstract/22/3/bbaa095/5855396
Research plan:
The student will start by learning about these data types as well as existing VS benchmarks (e.g. MUV) and VS methods (e.g. USR, Smina). Then, they will apply each VS method to rank the HTS-derived benchmark molecules in order to assess its performance on the associated target. This process will also be carried out for other VS methods, unadulterated HTS benchmarks and targets.
The results will be employed to investigate to which extent the filters used to select molecules for a VS benchmark make it unrealistic. This is crucial for the development and selection of VS methods.
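Assessing a VS method on a benchmark of actives and inactives is typically reported with metrics such as the enrichment factor (EF): rank the whole library by the method's score and compare the active rate in the top fraction with the library-wide rate. A minimal sketch with synthetic labels and scores (the project itself may use additional metrics):

```python
import numpy as np

def enrichment_factor(scores, labels, fraction=0.01):
    """EF = (active rate in the top fraction) / (active rate in the library)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)   # 1 = active, 0 = inactive
    n_top = max(1, int(round(fraction * len(scores))))
    order = np.argsort(-scores)              # best-scored molecules first
    top_hits = labels[order[:n_top]].sum()
    return (top_hits / n_top) / labels.mean()
```

Comparing EF values obtained on a filtered benchmark against those on the unadulterated HTS data is one concrete way to quantify how much the benchmark-design filters distort apparent VS performance.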
About the candidate:
This project is suitable for a student who is keen to learn about molecular modelling in the context of early drug design. Python programming is required. Contact: p.ballester@imperial.ac.uk
Dr Sam Au

Profile: https://profiles.imperial.ac.uk/s.au 

Contact details: s.au@imperial.ac.uk

Project title Description
Multicancer Diagnosis of BRCA-associated Breast and Ovarian Cancer from Blood by Microfluidic Isolation of Extracellular Vesicles Extracellular vesicles (EVs) are nanometre-scale lipid bilayer sacs generated from cells, containing genetic and protein payloads. EVs play very important roles in cancer and metastasis, including promoting the adhesion, migration, invasion and growth of metastasising tumour cells. EVs are commonly found in blood and therefore may be very useful for minimally invasive early diagnosis of cancer.

EVs contain distinct surface markers and payloads which may one day be used to determine what type of cancer an individual has and to help select which therapeutic approach is best for an individual patient. To one day achieve a "pan-cancer" method of cancer diagnosis from blood, we wish to first explore a "multicancer" approach. BRCA is a gene which confers a much higher risk of breast and ovarian cancer when women inherit mutations in it. Some women with BRCA mutations can have both breast and ovarian cancers at the same time. This makes it an excellent initial model for multicancer diagnosis, because blood samples can be taken for routine screening from the subset of women known to carry BRCA mutations. The challenge, however, is how best to isolate these small particles from the blood of cancer patients and then use them for simultaneous breast and ovarian cancer diagnosis.

The goal of this ambitious project is to develop a microfluidic device that a) isolates EVs from whole blood and b) determines whether these EVs are derived from breast and/or ovarian cancer. While many methods of EV isolation are currently under investigation, an inertia-based method that uses wall lift effects is particularly suited to the physical isolation of EVs in ways that do not modify their characteristics: https://www.science.org/doi/10.1126/sciadv.adi5296. Isolated EVs will then be combined with techniques such as immunohistochemistry or mass spectrometry to classify them, moving us towards a blood-based screen for the diagnosis of multiple cancer types at once.

Students will gain valuable skills in microfluidic device design, CAD, microfabrication, cancer cell culture, molecular biology and imaging. 
Professor Simon Schultz

Profile: https://profiles.imperial.ac.uk/s.schultz 

Contact details: s.schultz@imperial.ac.uk 

Project title Description
Understanding the effects of focused ultrasound stimulation (FUS) on human semantic cognition

Focused ultrasound stimulation (FUS) operates in the frequency range above human hearing. Tissue is affected by acoustic pressure waves that, at lower intensities, can modulate cell membrane properties that affect the likelihood of neurons firing. The ability to either increase or decrease neural activity, depending on the stimulation protocol, and to reach deep-brain structures has several potential clinical applications. The aim of the project is to develop and establish the combination of FUS and state-of-the-art multimodal imaging, towards the ultimate goal of developing cognitive enhancement technology for treating semantic impairments in ageing and neurodegenerative disorders. You will learn how to apply FUS safely to healthy participants and to collect and analyse behavioural and neuroimaging data. Prior to conducting the experiment, you will receive training in the use of FUS from the supervisors. This is a collaboration between the Schultz group and the groups of JeYoung Jung and Marcus Kaiser at the University of Nottingham, where the experiments will be performed under the supervision of Dr Jung.

 

The search for Sharp Wave Ripples (SWRs) in magnetoencephalographic recordings from human subjects

Sharp wave ripples (SWRs) are brief (~100 ms) periods of high-frequency (110-180 Hz) oscillations, observable as "ripples" in the local field potential, which begin in area CA3 of the hippocampus and propagate out through the cerebral cortex. They are the most synchronous non-pathological activity pattern in the mammalian brain. They are believed to play a key role in the consolidation of episodic and semantic memories from the hippocampus into the neocortex, as well as being involved in working memory. To date, no one has managed to detect SWRs in humans non-invasively: all human work has been in patients surgically implanted with electrodes. In this project we will attempt to overcome this barrier by collecting sample data from human subjects taking a nap in an OPM-MEG (optically pumped magnetometer magnetoencephalography) imaging system. The key aim of the project will be to establish proof of principle that this technology can be used to detect SWRs in human subjects.

 

Detecting sharp wave ripples (SWRs) in human epilepsy patients with medial temporal lobe electrode implants

Sharp wave ripples (SWRs) are brief (~100 ms) periods of high-frequency (110-180 Hz) oscillations, observable as "ripples" in the local field potential, which begin in area CA3 of the hippocampus and propagate out through the cerebral cortex. They are the most synchronous non-pathological activity pattern in the mammalian brain. They are believed to play a key role in the consolidation of episodic and semantic memories from the hippocampus into the neocortex, as well as being involved in working memory. In this project we will collaborate with neurosurgeon Antonio Valentin at the Institute of Psychiatry, KCL, to analyse data from human patients implanted with recording electrodes (for the purpose of treating epilepsy). Our aim will be to detect SWRs, and then to apply neural manifold analysis methods to investigate their functional and dynamical properties. This project will require good programming skills in Python or MATLAB, and a strong interest in neuroscience and its clinical applications.
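Applicants curious about what ripple detection involves may find a sketch useful. The snippet below is an illustrative outline of a common detection recipe (band-pass filter in the ripple band, Hilbert envelope, threshold, minimum duration), not the lab's actual pipeline; the function name, the 3 SD threshold and the 20 ms minimum duration are assumptions chosen for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def detect_swr(lfp, fs, band=(110.0, 180.0), threshold_sd=3.0,
               min_duration_ms=20.0):
    """Detect candidate sharp wave ripple events in a 1-D LFP trace.

    Band-pass filters the signal in the ripple band, takes the Hilbert
    envelope, and flags epochs where the envelope exceeds
    mean + threshold_sd * SD for at least min_duration_ms.
    Returns a list of (start_sample, end_sample) pairs.
    """
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, lfp)          # zero-phase ripple-band signal
    envelope = np.abs(hilbert(filtered))    # instantaneous amplitude
    thresh = envelope.mean() + threshold_sd * envelope.std()

    above = envelope > thresh
    # Rising/falling edges of the supra-threshold mask.
    edges = np.diff(above.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, len(above)]

    min_samples = int(min_duration_ms * fs / 1000.0)
    return [(s, e) for s, e in zip(starts, ends) if e - s >= min_samples]
```

In practice, published detectors add refinements (per-channel thresholds, event merging, artefact rejection), but the core of the approach is the envelope-threshold step shown here.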



Reference: Liu, A. et al., 2022. Nature Communications. https://doi.org/10.1038/s41467-022-33536-x

Sebastian, E.R., Quintanilla, J.P., Sánchez-Aguilera, A., Esparza, J., Cid, E. and de la Prida, L.M., 2023. Topological analysis of sharp-wave ripple waveforms reveals input mechanisms behind feature variations. Nature Neuroscience, pp. 1-11.

 

Multi-scale network theoretic analysis of two-photon mesoscopic calcium brain imaging data We have been performing experiments in which we image wide areas of the cortex (1 x 5 mm field of view) in mice performing cognitive and memory tasks. There is scope in the laboratory for one or more MRes students to work on analysis of these datasets using techniques from graph (network) theory and information theory.
Dr Sophie Morse

Profile: https://profiles.imperial.ac.uk/sophie.morse11 

Contact details: sophie.morse11@imperial.ac.uk

Project title Description
Non-invasive manipulation and imaging of the brain’s immune system Our brain has its own dedicated immune system and rapid response team: microglia. These cells actively survey the brain, clearing away toxins and pathogens. The ability to temporarily stimulate microglia has generated much excitement, due to its potential to treat brain diseases. For example, stimulating microglia can help clear away the amyloid-beta plaques that build up in Alzheimer’s disease. 

Focused ultrasound is a non-invasive and targeted technology that can stimulate microglia in any region of the brain. However, how ultrasound stimulates these crucially important cells is unknown.

This project aims to visualise whether focused ultrasound stimulates PIEZO1 mechanosensitive ion channels in microglia, to better understand the mechanism of this stimulation (expertise in Dr Morse’s group). GenEPi, a genetically encoded fluorescent reporter based on PIEZO1 developed in Dr Pantazis’s group, will be used to visualise whether ultrasound stimulates these ion channels, which play multiple roles in the activation of microglia.

The student will design a setup to simultaneously image the activity of PIEZO1 with confocal microscopy while performing ultrasound stimulation, which will be tested in a microglial cell line.

These results will provide invaluable insight into the mechanism by which focused ultrasound stimulates microglia, allowing ultrasound treatments to be optimised to achieve improved therapeutic effects for neurological disorders such as Alzheimer’s disease. 
Can ultrasound help prevent organ transplant rejection?  Immune cells triggering inflammatory responses can lead to the rejection of organ transplants. Recently, ultrasound has been shown to have an anti-inflammatory effect on immune cells such as macrophages. Here we propose to investigate how ultrasound can best be used to trigger the release of anti-inflammatory cytokines. CD4+/CD8+ T cells, purified regulatory T cells and macrophages will be cultured in vitro, and multiplexed cytokine assays will be performed following ultrasound treatment. These findings will be translated and tested on organ transplants of hearts, livers and lungs currently being performed at the Technical University of Munich (TUM). This project is in collaboration with Dr Konrad Fischer from TUM. Cell culture skills are preferable, and any experience with cytokine assays is desirable. 
Can focused ultrasound delay brain ageing?  Focused ultrasound is a technology that has very recently been shown to restore cognitive function in Alzheimer's disease mice and patients. This is a non-invasive technology that can be focused onto specific regions of the brain. One theory is that this technology can restore cognition by stimulating the innate immune cells of the brain as well as neuronal function and health. In this project you will explore 1) whether this same technology can be used to delay age-related cognitive decline, as well as restore cognition in Alzheimer's disease, and 2) delve into exploring the mechanisms behind why these effects are observed. This will involve working with mouse brain tissue, sectioning, imaging, staining and fluorescence microscopy.  
Can focused ultrasound delay Alzheimer's disease?  Focused ultrasound is a technology that has very recently been shown to restore cognitive function in Alzheimer's disease patients. This is a non-invasive technology that can be focused onto specific regions of the brain. One theory is that this technology can restore cognition by stimulating the innate immune cells of the brain as well as neuronal function and health. In this project you will explore 1) whether this same technology can be used to delay Alzheimer's as well as restore cognition and 2) delve into exploring the mechanisms behind why these effects are observed. This will involve working with mouse brain tissue, sectioning, imaging, staining and fluorescence microscopy.  
Dr Timothy Constandinou

Profile: https://profiles.imperial.ac.uk/t.constandinou 

Contact details: t.constandinou@imperial.ac.uk 

Project title Description
Pet health monitoring: evaluating the feasibility of contactless vital signs monitoring in small animals This research project aims to adapt contactless health sensors, originally developed for humans, for use in small animals. The student will evaluate the feasibility of modifying these vital signs monitoring devices to work effectively with cats and dogs. Working alongside veterinary research collaborators and our technical team at the Imperial Next Generation Neural Interfaces Lab, the student will conduct a small study to collect health data from these animals. The project involves adapting the sensors, gathering vital signs data, analysing these results, and comparing them against traditional veterinary monitoring methods. This research could potentially advance veterinary care technology by introducing more sophisticated, non-invasive monitoring tools for small animals.
Measuring criticality in mesoscopic recordings of cortical brain activity in mouse models There is evidence in the literature that the brain maintains itself in a specific operating state called the critical state. This state lies at the boundary between ordered and chaotic behaviour, in which the burst firing patterns of neurons follow a power-law relationship between the number of neurons involved and the length of the burst, but with no relationship to either time or space, i.e. this behaviour is stochastic in nature and has no characteristic scale. It has been proposed that operating in the critical state is optimal for the brain in terms of information processing (i.e. cognitive performance), and that the maintenance of this state is the result of a deliberate and carefully calibrated process. Conversely, in disease states the brain is expected to deviate from the critical state towards chaotic or ordered operating states, and we could measure this difference. If this is true, criticality could be used as a biomarker for cognitive degradation in disease states such as Alzheimer's Disease and Parkinson's Disease.
 
This project aims to analyse existing datasets of brain activity (obtained in collaboration with Barnes’ Lab at Imperial College) from mouse models, recorded using mesoscopic calcium imaging. These recordings capture changes in activity over nearly all of the cortical surface, making it possible to identify power-law relationships in bursts of activity where present. The mouse models from which this data has been captured include both wild-type and Alzheimer’s Disease model mice, enabling comparative analysis of any changes in criticality between the healthy and diseased brain.
 
Specific Hypotheses to be investigated in the project:
1. Healthy awake rodents show signatures of critical dynamics that can be measured using mesoscopic calcium recordings of brain activity. Test for these relationships in awake activity, identifying any spatial or temporal trends in power-law population-event size distributions and long-range temporal correlations.
2. Deviation from critical dynamics can occur in models of neurodegenerative conditions such as Alzheimer’s Disease, and is associated with suboptimal information processing in the brain (which can be measured in mouse models using behavioural assays as a control).
3. In mouse models, deviations from critical dynamics are associated with the presence of amyloid plaques.
4. The amount or severity of the deviation is correlated with amyloid plaque load.
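To give a flavour of the analysis involved, the power-law test for population-event (avalanche) sizes can be prototyped in a few lines. The sketch below is illustrative only (the project itself uses MATLAB, and real analyses require careful binning, thresholding and goodness-of-fit testing): it segments a binned population-activity trace into avalanches and estimates the size-distribution exponent with the continuous maximum-likelihood estimator of Clauset et al. (2009). All function names and thresholds are assumptions.

```python
import numpy as np

def avalanche_sizes(activity, threshold=0.0):
    """Segment a binned population-activity time series into avalanches.

    An avalanche is a maximal run of consecutive time bins whose activity
    exceeds `threshold`; its size is the total activity in that run.
    """
    active = activity > threshold
    sizes, current = [], 0.0
    for is_active, value in zip(active, activity):
        if is_active:
            current += value
        elif current > 0:
            sizes.append(current)
            current = 0.0
    if current > 0:
        sizes.append(current)
    return np.array(sizes)

def powerlaw_exponent(sizes, s_min=1.0):
    """Maximum-likelihood estimate of alpha for P(s) ~ s^(-alpha),
    s >= s_min (continuous approximation, Clauset et al. 2009)."""
    s = sizes[sizes >= s_min]
    return 1.0 + len(s) / np.sum(np.log(s / s_min))
```

At criticality one expects avalanche sizes to follow a power law with an exponent near 1.5 in many preparations; a systematic shift in the fitted exponent (or failure of the power-law fit altogether) between wild-type and disease-model recordings would be the kind of deviation the hypotheses above describe.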
 
Recommended reading:
1. Why Brain Criticality Is Clinically Relevant: A Scoping Review. Zimmern V. Front Neural Circuits. 2020 Aug 26;14:54.
2. Long-range temporal correlations in the brain distinguish conscious wakefulness from induced unconsciousness. Thiery, T. et al. Neuroimage 179, 30–39 (2018).
3. Voltage imaging of waking mouse cortex reveals emergence of critical neuronal dynamics. G Scott, ED Fagerholm, et al. Journal of Neuroscience 34 (50), 16611-16620 (2014).
4. Cortical entropy, mutual information and scale-free dynamics in waking mice. ED Fagerholm, G Scott et al. Cerebral Cortex 26 (10), 3945-3952 (2016).
5. Neuronal avalanches imply maximum dynamic range in cortical networks at criticality. Shew, W. L., Yang, H., Petermann, T., Roy, R. & Plenz, D. J Neurosci 29, 15595–15600 (2009).
6. Criticality in the Healthy Brain. Shi, J. et al.. Frontiers in Network Physiology 1, (2021).
7. How critical is brain criticality? O’Byrne, J. & Jerbi, K. Trends in Neurosciences vol. 45 (2022).
8. Implantable brain machine interfaces: first-in-human studies, technology challenges and trends. A. Rapeaux and T.G. Constandinou. Current Opinion in Biotechnology, Vol. 72. (2021)
9. Multi-scale network imaging in a mouse model of amyloidosis. Doostdar, N., Airey, J., Radulescu, C.I., Melgosa-Ecenarro, L., Zabouri, N., Pavlidi, P., Kopanitsa, M., Saito, T., Saido, T., and Barnes, S.J.* Cell Calcium (2021) 102365. doi: 10.1016/j.ceca.2021.102365.
 
We are looking for a student with strong analytical and data analysis skills, the ability to use the MATLAB language, an understanding of computer modelling techniques (especially where relevant to neuroscience), and knowledge of and experience with statistical analysis and tools. Experience using neuroscience-focused modelling toolsets such as NEURON (using the HOC language) is an asset. The student will be working with researchers at the South Kensington and White City campuses and will be expected to work in both locations during the course of the project.

This project will be co-supervised by Dr Adrien Rapeaux.